Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
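As a rough illustration of the kind of workflow the abstract describes, the sketch below composes MONAI's dictionary-based preprocessing transforms and instantiates one of its 3D segmentation networks with a Dice loss. The file names are placeholders and exact argument names can differ across MONAI versions, so treat this as a sketch rather than a definitive recipe.

```python
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Spacingd, ScaleIntensityd
)
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Dictionary-based preprocessing that respects medical-image geometry:
# resample image and label to 1 mm isotropic spacing and normalize intensity.
preprocess = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    Spacingd(keys=["image", "label"], pixdim=(1.0, 1.0, 1.0),
             mode=("bilinear", "nearest")),
    ScaleIntensityd(keys=["image"]),
])

# A purpose-built 3D segmentation network and loss provided by MONAI.
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

# Placeholder file names for illustration only.
sample = preprocess({"image": "ct_volume.nii.gz", "label": "ct_labels.nii.gz"})
logits = model(sample["image"].unsqueeze(0))           # add a batch dimension
loss = loss_fn(logits, sample["label"].unsqueeze(0))
```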
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. To inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game Benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, child development, mathematics, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
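Many BIG-bench tasks are specified as JSON files of input/target examples. The sketch below shows, in schematic form, how such a task might be scored with a simple exact-match metric; the assumed file layout (an "examples" list with single-string "input" and "target" fields) and the `generate(prompt)` callable standing in for whichever model is evaluated are simplifying assumptions, not the benchmark's official evaluation harness.

```python
import json

def exact_match_score(task_json_path, generate):
    """Schematic exact-match scoring in the style of BIG-bench JSON tasks.
    Assumes single-string "input"/"target" fields and a generate(prompt)
    callable wrapping the model under evaluation (illustrative only)."""
    with open(task_json_path) as f:
        task = json.load(f)
    examples = task["examples"]
    hits = sum(
        generate(ex["input"]).strip() == str(ex["target"]).strip()
        for ex in examples
    )
    return hits / len(examples)
```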
Previous work defined exploratory grasping, in which a robot iteratively grasps and drops an unknown complex polyhedral object to discover a set of robust grasps for each recognizably distinct stable pose of the object. Recent work used a multi-armed bandit model with a small set of candidate grasps per pose; however, for objects with few successful grasps, this set may not include the most robust grasp. We present Learning Efficient Grasp Sets (LEGS), an algorithm that can efficiently explore a large set of possible grasps by constructing small active sets of promising grasps and uses learned confidence bounds to determine when it can stop exploring the object with high confidence. Experiments suggest that LEGS can identify high-quality grasps more efficiently than existing algorithms that do not learn active sets. In simulation experiments, we measure the optimality gap between the success probability of the best grasp identified by LEGS and the baselines and that of the true most robust grasp. After 3000 exploration steps, LEGS outperforms the baseline algorithms on 10 of the 14 Dex-Net adversarial objects and 25 of the 39 EGAD! objects. We then develop a self-supervised grasping system in which the robot explores grasps with minimal human intervention. Physical experiments on 3 objects suggest that LEGS converges to high-performing grasps faster than the baselines. See \url{https://sites.google.com/view/legs-exp-grasping} for supplementary material and videos.
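The toy sketch below conveys the flavor of bandit-style grasp exploration with a small active set and optimistic confidence bounds. It is a schematic stand-in rather than the authors' LEGS algorithm, and `try_grasp(i)` is a hypothetical callable that executes grasp i and reports success or failure.

```python
import numpy as np

def explore_grasps(try_grasp, n_grasps, active_size=10, steps=3000, beta=2.0):
    """Toy UCB-style exploration over candidate grasps with a small active set.
    try_grasp(i) executes grasp i and returns 1 on success, 0 on failure."""
    successes = np.zeros(n_grasps)
    trials = np.zeros(n_grasps)
    active = np.random.choice(n_grasps, size=active_size, replace=False)
    for t in range(1, steps + 1):
        means = successes[active] / np.maximum(trials[active], 1)
        bonus = beta * np.sqrt(np.log(t) / np.maximum(trials[active], 1))
        i = active[np.argmax(means + bonus)]          # optimistic grasp choice
        reward = try_grasp(i)
        successes[i] += reward
        trials[i] += 1
        if t % 100 == 0:                              # refresh the active set
            worst = active[np.argmin(means)]
            others = np.setdiff1d(np.arange(n_grasps), active)
            if others.size:
                active[active == worst] = np.random.choice(others)
    return int(np.argmax(successes / np.maximum(trials, 1)))
```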
Portrait mode is widely available on smartphone cameras to provide an enhanced photographic experience. One of the primary effects applied to images captured in portrait mode is a synthetic shallow depth of field (DoF). The synthetic DoF (or bokeh effect) selectively blurs regions of the image to simulate the effect of using a large lens with a wide aperture. In addition, many applications now incorporate a new image motion attribute (NIMAT) to emulate background motion, where the motion is correlated with the estimated depth at each pixel. In this work, we follow the trend of rendering the NIMAT effect by introducing a modification to the blur synthesis procedure in portrait mode. In particular, our modification enables high-quality synthesis of multi-view bokeh from a single image by applying rotated blurring kernels. Given the synthesized multiple views, we can generate aesthetically realistic image motion similar to the NIMAT effect. We validate our approach against the original NIMAT effect and other similar image motions, such as Facebook 3D images. Our image motion demonstrates a smooth transition between image views with fewer artifacts around object boundaries.
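To make the idea of rotated blur kernels concrete, the toy sketch below builds a directional line kernel at a chosen angle and blurs the estimated background with it; sweeping the angle over several values yields the multiple "views" from which image motion can be rendered. It is an illustration only, assuming an RGB image and a depth map normalized to [0, 1], and it does not reproduce the paper's synthesis pipeline.

```python
import numpy as np
from scipy.ndimage import convolve

def directional_kernel(length, angle_deg):
    """Line (motion-blur) kernel rotated to the requested angle."""
    k = np.zeros((length, length), dtype=np.float32)
    c = (length - 1) / 2.0
    t = np.deg2rad(angle_deg)
    for r in np.linspace(-c, c, 4 * length):
        y = int(round(c + r * np.sin(t)))
        x = int(round(c + r * np.cos(t)))
        k[y, x] = 1.0
    return k / k.sum()

def render_view(image, depth, angle_deg, length=15):
    """Blur estimated background pixels with a rotated kernel; sweeping
    angle_deg over several values yields multiple bokeh 'views'."""
    k = directional_kernel(length, angle_deg)
    blurred = np.stack(
        [convolve(image[..., ch], k, mode="reflect") for ch in range(3)], axis=-1)
    background = (depth > 0.5)[..., None]   # toy foreground/background split
    return np.where(background, blurred, image)
```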
We propose a framework for aligning and fusing multiple images into a single view using neural image representations (NIRs), also known as implicit or coordinate-based neural representations. Our framework targets burst images that exhibit camera ego-motion and potential changes in the scene. We describe different alignment strategies depending on the nature of the scene motion, namely perspective planar (i.e., a homography), optical flow with minimal scene change, and optical flow with notable occlusion and dis-occlusion. With neural image representations, our framework effectively combines multiple inputs into a single canonical view without needing to select one of the images as a reference frame. We demonstrate how to use this multi-frame fusion framework for various layer separation tasks.
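A minimal sketch of the coordinate-based representation idea: a small MLP maps canonical 2D coordinates to RGB, and pixels from every burst frame are mapped into that canonical frame before regression. The per-frame homographies and the `batches` iterator are assumed to be provided elsewhere; this is a toy illustration, not the authors' framework.

```python
import torch
import torch.nn as nn

class NIR(nn.Module):
    """Tiny coordinate-based image representation: (x, y) -> RGB."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, xy):               # xy: (N, 2) canonical coordinates
        return self.net(xy)              # (N, 3) RGB

def to_canonical(xy, H):
    """Map frame pixel coordinates into the canonical view via homography H (3x3)."""
    ones = torch.ones_like(xy[:, :1])
    p = torch.cat([xy, ones], dim=1) @ H.T
    return p[:, :2] / p[:, 2:3]

def fit(nir, batches, lr=1e-3, epochs=1):
    """batches yields (xy, rgb, H): frame pixel coords, colors, and that frame's
    pre-estimated homography to the canonical view (all assumed given)."""
    opt = torch.optim.Adam(nir.parameters(), lr=lr)
    for _ in range(epochs):
        for xy, rgb, H in batches:
            loss = ((nir(to_canonical(xy, H)) - rgb) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return nir
```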
Imaging sensors digitize incoming scene light at a dynamic range of 10 to 12 bits (i.e., 1024 to 4096 tonal values). The sensor image is then processed onboard the camera and finally quantized to only 8 bits (i.e., 256 tonal values) to conform to prevailing encoding standards. There are many important applications, such as high-bit-depth displays and photo editing, that benefit from recovering the lost bit depth. Deep neural networks are effective at this bit-depth reconstruction task. Given a quantized low-bit-depth image as input, existing deep learning methods employ a single-shot approach that attempts either (1) to directly estimate the high-bit-depth image or (2) to directly estimate the residual between the high- and low-bit-depth images. In contrast, we propose a training and inference strategy that recovers the residual image bit plane by bit plane. Our bitplane-wise learning framework has the advantage of allowing multiple levels of supervision during training and is able to obtain state-of-the-art results using a simple network architecture. We test our proposed method extensively on several image datasets and demonstrate an improvement of 0.5 dB to 2.3 dB PSNR over prior methods, depending on the quantization level.
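As a concrete picture of the bitplane-wise target, the sketch below splits the residual between a high-bit-depth image and its 8-bit quantization into individual bit planes and reassembles them. It assumes simple truncation-style quantization (low = high >> lost_bits); in the paper's setting a network would predict these planes rather than compute them from ground truth.

```python
import numpy as np

def residual_bitplanes(high, low, lost_bits=4):
    """Split the residual between a high-bit-depth image (e.g. 12-bit in uint16)
    and its 8-bit quantization into bit planes, MSB first. Assumes truncation
    quantization, i.e. low = high >> lost_bits."""
    residual = high.astype(np.uint16) - (low.astype(np.uint16) << lost_bits)
    return [((residual >> b) & 1).astype(np.uint8)
            for b in reversed(range(lost_bits))]

def reassemble(low, planes, lost_bits=4):
    """Rebuild the high-bit-depth image from the 8-bit input plus the recovered
    bit planes (here ground truth; a network would predict `planes` instead)."""
    out = low.astype(np.uint16) << lost_bits
    for i, p in enumerate(planes):                    # planes are MSB first
        out += p.astype(np.uint16) << (lost_bits - 1 - i)
    return out
```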
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has been commonly applied to two human partners who are trying to work jointly together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between each partner.
We consider the problem of finding an accurate representation of neuron shapes, extracting sub-cellular features, and classifying neurons based on neuron shapes. In neuroscience research, the skeleton representation is often used as a compact and abstract representation of neuron shapes. However, existing methods are limited to extracting and analyzing "curve" skeletons, which can only be applied to tubular shapes. This paper presents a 3D neuron morphology analysis method for more general and complex neuron shapes. First, we introduce the concept of a skeleton mesh to represent general neuron shapes and propose a novel method for computing mesh representations from 3D surface point clouds. A skeleton graph is then obtained from the skeleton mesh and used to extract sub-cellular features. Finally, an unsupervised learning method is used to embed the skeleton graph for neuron classification. Extensive experimental results demonstrate the robustness of our method for analyzing neuron morphology.
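For intuition, the toy sketch below derives a few simple sub-cellular-style features (branch points, tips, total cable length) from a skeleton graph using networkx. The assumed edge format with a "length" attribute is for illustration only and does not reproduce the paper's feature set or graph embedding.

```python
import networkx as nx

def skeleton_graph_features(edges):
    """Toy features from a skeleton graph. `edges` is assumed to be a list of
    (u, v, {"length": float}) tuples; illustrative only."""
    G = nx.Graph()
    G.add_edges_from(edges)
    degrees = dict(G.degree())
    branch_points = [n for n, d in degrees.items() if d >= 3]   # bifurcations
    tips = [n for n, d in degrees.items() if d == 1]            # terminal ends
    total_length = sum(nx.get_edge_attributes(G, "length").values())
    return {
        "n_branch_points": len(branch_points),
        "n_tips": len(tips),
        "total_cable_length": total_length,
    }
```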
SchNetPack is a versatile neural network toolbox that addresses both the requirements of method development and the application of atomistic machine learning. Version 2.0 comes with an improved data pipeline, modules for equivariant neural networks, and a PyTorch implementation of molecular dynamics. An optional integration with PyTorch Lightning and the Hydra configuration framework powers a flexible command-line interface. This makes SchNetPack 2.0 easily extendable with custom code and ready for complex training tasks, such as the generation of 3D molecular structures.
The NASA Astrophysics Data System (ADS) is an essential tool for researchers that allows them to explore the astronomy and astrophysics scientific literature, but it has yet to exploit recent advances in natural language processing. At ADASS 2021, we introduced astroBERT, a machine learning language model tailored to the text used in astronomy papers in ADS. In this work we: (1) announce the first public release of the astroBERT language model; (2) show how astroBERT improves over existing public language models on astrophysics-specific tasks; and (3) detail how ADS plans to harness the unique structure of scientific papers, the citation graph, and citation context to further improve astroBERT.
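A minimal sketch of how a released model like astroBERT could be loaded through the Hugging Face transformers API to produce contextual embeddings of astronomy text. The Hub model ID "adsabs/astroBERT" is our assumption of where the public release lives and should be verified against the official announcement.

```python
from transformers import AutoTokenizer, AutoModel

# "adsabs/astroBERT" is an assumed Hub model ID; verify before relying on it.
tokenizer = AutoTokenizer.from_pretrained("adsabs/astroBERT")
model = AutoModel.from_pretrained("adsabs/astroBERT")

text = "We measure the dark matter halo mass of the Milky Way."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
token_embeddings = outputs.last_hidden_state   # contextual embeddings of the text
```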